# Korean sentence embedding

## Gte Base Ko
A sentence embedding model for semantic similarity calculation, fine-tuned on a Korean triplet dataset from Alibaba-NLP/gte-multilingual-base.
- Tags: Text Embedding · Multilingual
- Author: juyoungml · Downloads: 18 · Likes: 2
## E5 Large Korean
A Korean sentence embedding model fine-tuned from multilingual-e5-large, mapping text into a 1024-dimensional vector space; suited to tasks such as semantic similarity calculation.
- License: MIT
- Tags: Text Embedding · Transformers · Multilingual
- Author: upskyy · Downloads: 2,222 · Likes: 2
## E5 Small Korean
A Korean sentence embedding model fine-tuned from intfloat/multilingual-e5-small, producing 384-dimensional vector representations; suited to tasks such as semantic similarity calculation.
- License: MIT
- Tags: Text Embedding · Transformers · Multilingual
- Author: upskyy · Downloads: 2,510 · Likes: 2
## Bge M3 Korean
A Korean-optimized sentence embedding model based on BAAI/bge-m3, producing 1024-dimensional vector representations; suited to tasks such as semantic similarity calculation.
- Tags: Text Embedding · Transformers · Multilingual
- Author: upskyy · Downloads: 7,823 · Likes: 51
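All of the models above are used the same way: encode two sentences into vectors, then compare the vectors. A minimal sketch of the comparison step with NumPy (the `cosine_similarity` helper and the tiny example vectors are illustrative stand-ins, not output from any of these models):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative 4-d stand-ins for real 1024-d sentence embeddings.
emb_a = np.array([0.1, 0.3, -0.2, 0.7])
emb_b = np.array([0.1, 0.3, -0.2, 0.7])
emb_c = np.array([-0.5, 0.1, 0.9, -0.2])

print(cosine_similarity(emb_a, emb_b))  # identical vectors -> similarity 1.0
print(cosine_similarity(emb_a, emb_c))  # dissimilar vectors -> lower score
```

In practice you would obtain `emb_a` and `emb_c` by encoding two Korean sentences with one of the models listed here and rank candidate sentences by this score.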
## Kf Deberta Multitask
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to tasks such as clustering and semantic search.
- Tags: Text Embedding · Transformers · Korean
- Author: upskyy · Downloads: 1,866 · Likes: 15
## Kosimcse Bert Multitask
KoSimCSE-BERT-multitask is a BERT-based Korean sentence embedding model optimized with a multi-task learning strategy to produce high-quality Korean sentence embeddings.
- Tags: Text Embedding · Transformers · Korean
- Author: BM-K · Downloads: 827 · Likes: 8
## Sentence Transformers Klue Bert Base
A sentence-transformers model based on KLUE BERT-base that maps sentences and paragraphs into a 768-dimensional dense vector space.
- Tags: Text Embedding · Transformers
- Author: hunkim · Downloads: 119 · Likes: 0
## Ko Sroberta Nli
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space.
- Tags: Text Embedding · Korean
- Author: jhgan · Downloads: 3,840 · Likes: 8
## Ko Sroberta Multitask
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to tasks such as clustering and semantic search.
- Tags: Text Embedding · Korean
- Author: jhgan · Downloads: 162.23k · Likes: 115
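Under the hood, sentence-transformers models like these typically produce the fixed 768-dimensional sentence vector by mean-pooling the transformer's per-token embeddings while masking out padding tokens. A hedged NumPy sketch of that pooling step (shapes and values are made up for illustration; real models use a 768-dimensional hidden size):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over real (non-padding) tokens.

    token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    Returns a single (dim,) sentence embedding.
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1) broadcastable mask
    summed = (token_embeddings * mask).sum(axis=0)  # sum only over real tokens
    count = np.clip(mask.sum(), 1e-9, None)         # avoid division by zero
    return summed / count

# Toy example: 3 real tokens + 1 padding token, dim=2 instead of 768.
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [9.0, 9.0]])
mask = np.array([1, 1, 1, 0])
print(mean_pool(tokens, mask))  # -> [3. 4.]
```

The padding row `[9.0, 9.0]` is excluded by the mask, so only the three real tokens are averaged; this is why sequences of different lengths still yield vectors of the same dimensionality.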
## Ko Sbert Nli
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space.
- Tags: Text Embedding
- Author: jhgan · Downloads: 18.99k · Likes: 21
## Klue Sentence Roberta Base Kornlu
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional vector space; suited to tasks such as semantic search and clustering.
- Tags: Text Embedding · Transformers
- Author: bespin-global · Downloads: 13 · Likes: 0
## Ko Sbert Multitask
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space.
- Tags: Text Embedding
- Author: jhgan · Downloads: 7,030 · Likes: 17
## Ko Sbert Sts
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to tasks such as clustering and semantic search.
- Tags: Text Embedding
- Author: jhgan · Downloads: 175.93k · Likes: 9
## Ko Sroberta Sts
A Korean sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space.
- Tags: Text Embedding
- Author: jhgan · Downloads: 86 · Likes: 0